
    Issues in knowledge representation to support maintainability: A case study in scientific data preparation

    Scientific data preparation is the process of extracting usable scientific data from raw instrument data. This task involves noise detection (and subsequent noise classification and flagging or removal), extraction of data from compressed forms, and construction of derivative or aggregate data (e.g., spectral densities or running averages). A software system called PIPE provides intelligent assistance to users developing scientific data preparation plans using a programming language called Master Plumber. PIPE provides this assistance by using a process description to create a dependency model of the scientific data preparation plan. This dependency model can then be used to verify syntactic and semantic constraints on processing steps, providing limited plan validation. PIPE also uses this model to assist in debugging faulty data preparation plans; in this case, the process model focuses the developer's attention on those processing steps and data elements that were used in computing the faulty output values. Finally, the dependency model of a plan can be used to perform plan optimization and runtime estimation. These capabilities allow scientists to spend less time developing data preparation procedures and more time on scientific analysis tasks. Because the scientific data processing modules (called fittings) evolve to match scientists' needs, maintainability is of prime importance in PIPE. This paper describes the PIPE system and how maintainability concerns shaped the knowledge representation used in PIPE to capture knowledge about the behavior of fittings.
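
    To make the dependency-model idea concrete, the sketch below is a hypothetical Python rendering of it; the names Step, DependencyModel, validate, and trace_fault are illustrative and are not PIPE's actual API or the Master Plumber language. Each processing step records which data elements it consumes and produces, validation checks that every input is available before it is used, and a faulty output is traced backwards to the steps and data elements that contributed to it.

        # Minimal sketch of a dependency model for a data preparation plan.
        # All names here are illustrative assumptions, not PIPE's real interfaces.

        class Step:
            def __init__(self, name, inputs, outputs):
                self.name = name                 # e.g. "despike", "spectral_density"
                self.inputs = set(inputs)        # data elements this step consumes
                self.outputs = set(outputs)      # data elements this step produces

        class DependencyModel:
            def __init__(self, steps):
                self.steps = steps
                # Map each data element to the step that produces it.
                self.producer = {out: s for s in steps for out in s.outputs}

            def validate(self):
                """Limited plan validation: every input is either raw data
                (no producer in the plan) or produced by an earlier step."""
                produced = set()
                for s in self.steps:
                    missing = {i for i in s.inputs
                               if i in self.producer and i not in produced}
                    if missing:
                        raise ValueError(f"{s.name} uses {missing} before they are produced")
                    produced |= s.outputs

            def trace_fault(self, faulty_output):
                """Return the steps and data elements that contributed to a faulty
                output value, so attention can be focused on them."""
                steps, data = [], set()
                frontier = {faulty_output}
                while frontier:
                    elem = frontier.pop()
                    data.add(elem)
                    step = self.producer.get(elem)
                    if step and step not in steps:
                        steps.append(step)
                        frontier |= step.inputs - data
                return steps, data

        # Hypothetical usage with a three-step plan.
        plan = [
            Step("decompress", ["raw_telemetry"], ["samples"]),
            Step("despike", ["samples"], ["clean_samples"]),
            Step("spectral_density", ["clean_samples"], ["psd"]),
        ]
        model = DependencyModel(plan)
        model.validate()
        steps, data = model.trace_fault("psd")

    Under these assumptions, tracing the faulty "psd" output surfaces the spectral-density, despiking, and decompression steps together with the data elements they consumed, which mirrors how the abstract describes PIPE focusing the developer's attention during debugging.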

    Intelligent assistance in scientific data preparation

    Scientific data preparation is the process of extracting usable scientific data from raw instrument data. This task involves noise detection (and subsequent noise classification and flagging or removal), extraction of data from compressed forms, and construction of derivative or aggregate data (e.g., spectral densities or running averages). A software system called PIPE provides intelligent assistance to users developing scientific data preparation plans using a programming language called Master Plumber. PIPE provides this assistance by using a process description to create a dependency model of the scientific data preparation plan. This dependency model can then be used to verify syntactic and semantic constraints on processing steps, providing limited plan validation. PIPE also uses this model to assist in debugging faulty data preparation plans; in this case, the process model focuses the developer's attention on those processing steps and data elements that were used in computing the faulty output values. Finally, the dependency model of a plan can be used to perform plan optimization and runtime estimation. These capabilities allow scientists to spend less time developing data preparation procedures and more time on scientific analysis tasks.

    Book Reviews


    Telling Your Story: Using Metrics to Display Your Value (H2)

    The American Bar Association, academic institutions, law firms, and governments are increasingly demanding outcome-based measures of performance. Displaying these outcomes, however, is difficult for law libraries: they possess an abundance of data, but determining which metrics will showcase a law library's value and performance is not straightforward. Speakers from a law school, a law firm, and a court library will explain the different metrics they use to display their value to their stakeholders. After these short presentations, a “fishbowl” discussion will give participants the chance to share and learn about the different metrics and tools law libraries are using to best tell their story.

    Deep Scattering Power Spectrum Features for Robust Speech Recognition


    The CODATA-RDA Data Steward School

    Given the expected increase in demand for Data Stewards and Data Stewardship skills, there is a clear need to develop training, education and CPD (continuous professional development) in this area. This paper provides a brief introduction to the origin of definitions of Data Stewardship and notes the present tendency to equate Data Stewardship skills with the FAIR principles. It then focuses on one specific training event: the pilot Data Stewardship strand of the CODATA-RDA Research Data Science schools which, by the time of the IDCC meeting, will have been held in Trieste in August 2019. The paper discusses the overall curriculum for the pilot school, how it matches the FAIR4S framework, and plans for getting feedback from the students. Finally, it discusses future plans for the school, in particular how to deepen the integration of the Data Stewardship strand with the Early Career Researcher strand. [This paper is a conference pre-print presented at IDCC 2020 after lightweight peer review.]

    WID course enhancements in STEM: The impact of adding ‘writing circles’ and writing process pedagogy

    This study reports on a quantitative assessment of enhancements to a Writing in the Disciplines course in Kinesiology. The assessment coded student writing produced in semesters before and after the course was enhanced with both iterated peer review groups and writing-process scaffolding. These enhancements were developed through a sustained partnership between WAC and disciplinary faculty. Analysis of the results revealed significantly higher scores on five Learning Outcomes developed to align with the Framework for Success in Postsecondary Writing (2011). These findings offer quantitative evidence that adding writing-process pedagogy and iterated peer review improves student outcomes in both writing and critical thinking.